job interview
Meet the Palestinian Teens Trying to Win Robotics Gold
Next week, five teens from Palestine will head to Panama to compete in one of the world's largest youth robotics competitions. Their goal: to win--and then teach STEM to their peers displaced by the Israel-Hamas war. For the entirety of the past year, as the teenage roboticists of Team Palestine have been working on their latest project, their homeland has been engulfed in Israel's war with Hamas. Earlier this month, that all changed. With a fragile ceasefire in place, Israeli forces began to pull back from parts of Gaza, and the teens put the final touches on the project they hope will bring them victory: a robot that can maneuver through a series of simulated challenges based on the effects of climate change.
- Asia > Middle East > Israel (0.89)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.25)
- North America > Panama (0.24)
- (14 more...)
Is fear contagious?
Fear isn't just personal--it spreads through sight, smell, and even subconsciously. Horror movies may be scarier in a crowded movie theater. We've all felt it: heart racing, palms sweating, stomach clenching--the iron grip of fear, whether it's the sudden threat of an out-of-control vehicle or the nervous wait before a job interview.
- Leisure & Entertainment (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Media > Film (0.90)
People Are Using AI to Cheat in Job Interviews
People are sneaking answers from AI, and who can blame them? "Interviews are NOT real anymore." So reads the opening caption of a TikTok posted in September, punctuated by the skull-and-crossbones emoji. The poster has a smartphone propped up against her laptop screen, so she can read off the responses that an AI app has composed for her: "Um, yeah, so, one of my key strengths is my adaptability." Getting generative artificial intelligence to whisper into your ear during a job interview certainly counts as cheating.
Beyond Jailbreaking: Auditing Contextual Privacy in LLM Agents
Das, Saswat, Sandler, Jameson, Fioretto, Ferdinando
LLM agents have begun to appear as personal assistants, customer service bots, and clinical aides. While these applications deliver substantial operational benefits, they also require continuous access to sensitive data, which increases the likelihood of unauthorized disclosures. Moreover, these risks go beyond explicit disclosure, leaving open avenues for gradual manipulation or side-channel information leakage. This study proposes an auditing framework for conversational privacy that quantifies an agent's susceptibility to these risks. The proposed Conversational Manipulation for Privacy Leakage (CMPL) framework is designed to stress-test agents that enforce strict privacy directives against an iterative probing strategy. Rather than focusing solely on a single disclosure event or purely explicit leakage, CMPL simulates realistic multi-turn interactions to systematically uncover latent vulnerabilities. Our evaluation on diverse domains, data modalities, and safety configurations demonstrates the auditing framework's ability to reveal privacy risks that are not deterred by existing single-turn defenses, along with an in-depth longitudinal study of the temporal dynamics of leakage, strategies adopted by adaptive adversaries, and the evolution of adversarial beliefs about sensitive targets. In addition to introducing CMPL as a diagnostic tool, the paper delivers (1) an auditing procedure grounded in quantifiable risk metrics and (2) an open benchmark for evaluation of conversational privacy across agent implementations.
- North America > United States > Virginia (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- North America > United States > Kansas (0.04)
- (8 more...)
- Research Report (1.00)
- Personal > Interview (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- (14 more...)
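The multi-turn probing idea behind CMPL can be illustrated with a toy simulation. Everything below is an illustrative stand-in, not the paper's implementation: the agent, the adversary's probes, and names like `agent_reply` and `audit` are assumptions.

```python
# Toy sketch of a multi-turn privacy audit in the spirit of CMPL:
# an agent blocks explicit requests but can be worn down by a
# gradual, seemingly benign conversation.

SECRET = "patient has diabetes"

def agent_reply(history, probe):
    """Toy agent: refuses direct questions, but leaks a fragment once
    the adversary has built up enough conversational context."""
    if "diagnosis" in probe.lower():
        return "I cannot share medical details."
    if len(history) >= 3:  # gradual manipulation succeeds after rapport
        return f"Well, the dietary plan is because the {SECRET}."
    return "Happy to help with scheduling."

def leaked(reply, secret=SECRET):
    return secret in reply

def audit(probes, max_turns=10):
    """Run an iterative probing strategy; report the first turn at
    which the sensitive target appears in a reply, else None."""
    history = []
    for turn, probe in enumerate(probes[:max_turns], start=1):
        reply = agent_reply(history, probe)
        history.append((probe, reply))
        if leaked(reply):
            return turn
    return None

probes = [
    "What is the diagnosis?",          # explicit request: blocked
    "Can you book an appointment?",    # benign rapport-building
    "What time works best?",
    "Why that dietary plan, though?",  # indirect probe
]
print(audit(probes))  # → 4
```

A single-turn defense catches only the first probe here; the audit loop surfaces the leak that emerges across turns, which is the kind of latent vulnerability the framework targets.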
Single- vs. Dual-Prompt Dialogue Generation with LLMs for Job Interviews in Human Resources
De Baer, Joachim, Doğruöz, A. Seza, Demeester, Thomas, Develder, Chris
Optimizing language models for use in conversational agents requires large quantities of example dialogues. Increasingly, these dialogues are synthetically generated by using powerful large language models (LLMs), especially in domains where authentic human data is hard to obtain. One such domain is human resources (HR). In this context, we compare two LLM-based dialogue generation methods for the use case of generating HR job interviews, and assess whether one method generates higher-quality dialogues that are more challenging to distinguish from genuine human discourse. The first method uses a single prompt to generate the complete interview dialogue. The second method uses two agents that converse with each other. To evaluate dialogue quality under each method, we ask a judge LLM to determine whether AI was used for interview generation, using pairwise interview comparisons. We demonstrate that despite a sixfold increase in token cost, interviews generated with the dual-prompt method achieve a win rate up to ten times higher than those generated with the single-prompt method. This difference remains consistent regardless of whether GPT-4o or Llama 3.3 70B is used for either interview generation or judging quality.
- North America > Mexico > Mexico City > Mexico City (0.04)
- Europe > Belgium > Flanders > East Flanders > Ghent (0.04)
- Asia > Singapore (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Research Report (1.00)
- Personal > Interview (1.00)
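The two generation modes compared in the abstract can be sketched as follows. `llm` is a stubbed stand-in for a real model call (e.g. GPT-4o or Llama 3.3 70B), and the function names are hypothetical, not the authors' code.

```python
def llm(prompt):
    """Stand-in for a real LLM API call; returns a canned line so the
    sketch runs without network access."""
    return "Tell me about a challenge you overcame."

def single_prompt_interview(job, turns=4):
    """Method 1: one prompt asks the model to write the whole
    dialogue in a single generation."""
    prompt = (f"Write a {turns}-turn job interview for a {job} role, "
              "alternating Interviewer and Candidate.")
    return llm(prompt)

def dual_prompt_interview(job, turns=4):
    """Method 2: two agents, each prompted with its own role, converse
    turn by turn. Roughly 6x the token cost, but per the paper this
    style won up to 10x more often in pairwise LLM-judge comparisons."""
    dialogue = []
    for i in range(turns):
        role = "Interviewer" if i % 2 == 0 else "Candidate"
        context = "\n".join(dialogue)
        utterance = llm(f"You are the {role} in a {job} interview.\n"
                        f"Dialogue so far:\n{context}\nYour next line:")
        dialogue.append(f"{role}: {utterance}")
    return "\n".join(dialogue)
```

The design difference is that the dual-prompt loop re-sends the accumulated dialogue as context on every turn, which is where the sixfold token cost comes from.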
STaR-GATE: Teaching Language Models to Ask Clarifying Questions
Andukuri, Chinmaya, Fränken, Jan-Philipp, Gerstenberg, Tobias, Goodman, Noah D.
When prompting language models to complete a task, users often leave important aspects unsaid. While asking questions could resolve this ambiguity (GATE; Li et al., 2023), models often struggle to ask good questions. We explore a language model's ability to self-improve (STaR; Zelikman et al., 2022) by rewarding the model for generating useful questions -- a simple method we dub STaR-GATE. We generate a synthetic dataset of 25,500 unique persona-task prompts to simulate conversations between a pretrained language model -- the Questioner -- and a Roleplayer whose preferences are unknown to the Questioner. By asking questions, the Questioner elicits preferences from the Roleplayer. The Questioner is iteratively finetuned on questions that increase the probability of high-quality responses to the task, which are generated by an Oracle with access to the Roleplayer's latent preferences. After two iterations of self-improvement, the Questioner asks better questions, allowing it to generate responses that are preferred over responses from the initial model on 72% of tasks. Our results indicate that teaching a language model to ask better questions leads to better personalized responses.
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Croatia (0.04)
- North America > United States > New York (0.04)
- (3 more...)
- Research Report > New Finding (0.66)
- Personal > Interview (0.46)
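A toy version of the STaR-GATE self-improvement loop might look like the sketch below: sample candidate clarifying questions, let an Oracle with access to the latent preferences score them, and collect the winners as finetuning data. The sampling, scoring, and "finetuning" step are all stubs, and every name here is an illustrative assumption.

```python
import random

random.seed(0)

def sample_questions(task, n=4):
    """Stand-in for the Questioner sampling candidate questions."""
    return [f"Q{i}: what matters most to you about {task}?" for i in range(n)]

def oracle_score(question, latent_prefs):
    """Stand-in reward: prefer questions that touch the Roleplayer's
    hidden preference; otherwise assign a small random score."""
    key = latent_prefs.split()[0]
    return 1.0 if key in question else random.random() * 0.5

def star_gate_iteration(task, latent_prefs, dataset):
    """One iteration: keep the question the Oracle scores highest and
    add it to the finetuning dataset (finetuning itself is elided)."""
    questions = sample_questions(task)
    best = max(questions, key=lambda q: oracle_score(q, latent_prefs))
    dataset.append((task, best))
    return dataset

dataset = []
for _ in range(2):  # two iterations of self-improvement, as in the paper
    dataset = star_gate_iteration("planning a trip", "budget flexibility", dataset)
```

The real method finetunes the Questioner on the collected questions between iterations, which is the step this sketch replaces with simple data collection.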
An Analysis of User Behaviors for Objectively Evaluating Spoken Dialogue Systems
Inoue, Koji, Lala, Divesh, Ochi, Keiko, Kawahara, Tatsuya, Skantze, Gabriel
Establishing evaluation schemes for spoken dialogue systems is important, but it can also be challenging. While subjective evaluations are commonly used in user experiments, objective evaluations are necessary for research comparison and reproducibility. To address this issue, we propose a framework for indirectly but objectively evaluating systems based on users' behaviors. To this end, we investigate the relationship between user behaviors and subjective evaluation scores in three social dialogue tasks: attentive listening, job interview, and first-meeting conversation. The results reveal that in dialogue tasks where user utterances are primary, such as attentive listening and job interview, indicators like the number of utterances and words play a significant role in evaluation. Observed disfluency can also serve as an indicator in formal tasks, such as job interview. On the other hand, in dialogue tasks with high interactivity, such as first-meeting conversation, behaviors related to turn-taking, like average switch pause length, become more important. These findings suggest that selecting appropriate user behaviors can provide valuable insights for objective evaluation in each social dialogue task.
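The behavioral indicators the abstract names (utterance and word counts, disfluency, average switch pause) could be computed from a transcript roughly as below; the field names and the disfluency marker set are illustrative assumptions, not the authors' exact feature definitions.

```python
# Sketch of objective behavioral indicators from an annotated transcript.
DISFLUENCIES = {"um", "uh", "er"}

def behavior_metrics(turns):
    """`turns` is a list of dicts with: speaker, text, and pause --
    the silence (seconds) before the turn began, used for
    switch-pause statistics."""
    user = [t for t in turns if t["speaker"] == "user"]
    words = [w.lower().strip(",.") for t in user for w in t["text"].split()]
    switch_pauses = [t["pause"] for prev, t in zip(turns, turns[1:])
                     if prev["speaker"] != t["speaker"]]
    return {
        "n_utterances": len(user),                 # salient for interview tasks
        "n_words": len(words),
        "n_disfluencies": sum(w in DISFLUENCIES for w in words),
        "avg_switch_pause": (sum(switch_pauses) / len(switch_pauses)
                             if switch_pauses else 0.0),  # salient for chit-chat
    }

turns = [
    {"speaker": "system", "text": "Tell me about yourself.", "pause": 0.0},
    {"speaker": "user", "text": "Um, I worked in sales.", "pause": 0.8},
    {"speaker": "system", "text": "Go on.", "pause": 0.4},
    {"speaker": "user", "text": "I, uh, led a small team.", "pause": 0.6},
]
m = behavior_metrics(turns)
```

The point of the paper's framework is that which of these metrics correlates with subjective scores depends on the task, so an evaluator would select per task rather than use them all uniformly.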
New book exposes how 99% of Fortune 500 companies use the tech to 'watch' interviews and 'read' resumes to make hiring decisions without human oversight
AI has taken over the job market by reading resumes and watching interviews to provide human executives with the best candidates, a new book has revealed. The book, titled 'The Algorithm,' has pulled the curtain on how the hiring world is becoming a 'Wild West' where unregulated AI algorithms make decisions without human oversight. Artificial intelligence decides who gets hired and who gets fired by monitoring everything from what people post on social media to their tone of voice in interviews, the book's author, Hilke Schellmann, told DailyMail.com. Algorithms can now dictate not only who gets job interviews - but, thanks to continuous on-the-job monitoring, who gets promoted or fired (and they might even warn your boss if you are getting divorced). Schellmann said the CEO of ZipRecruiter told her a few years ago that the tech was screening at least 75 percent of resumes.
- Health & Medicine (0.48)
- Law (0.48)
- Information Technology > Security & Privacy (0.48)
Want to impress your boss? Praise your colleagues (and yourself)! Scientists claim 'dual promotion' is the key to seeming competent at work
In the tough world of work we all need to do a little self-promotion now and then. But there's a tough balance to be struck between making our accomplishments known without coming across as unlikeable. Now a study has found the answer: highlight your work-mates' achievements at the same time as you shine a light on your own. Researchers say this 'dual promotion' tactic is the perfect way to make sure we are perceived as competent while still radiating 'warmth'. 'We show that by simultaneously other-promoting - describing accomplishments and qualities of others - and self-promoting - describing one's own accomplishments and qualities - individuals can project both warmth and competence,' said the researchers.
- North America > United States > Pennsylvania (0.05)
- Europe > United Kingdom (0.05)